This paper studies audio-visual noise suppression for egocentric videos -- where the speaker is not captured in the video. Instead, potential noise sources are visible on screen, with the camera emulating the off-screen speaker's view of the outside world. This setting differs from prior work in audio-visual speech enhancement that relies on lip and facial visuals. In this paper, we first demonstrate that egocentric visual information is helpful for noise suppression. We compare object-recognition and action-classification based visual feature extractors and investigate methods to align audio and visual representations. We then examine different fusion strategies for the aligned features, and the locations within the noise suppression model at which to incorporate visual information. Experiments demonstrate that visual features are most helpful when used to generate additive correction masks. Finally, to ensure that the visual features are discriminative with respect to different noise types, we introduce a multi-task learning framework that jointly optimizes audio-visual noise suppression and video-based acoustic event detection. The proposed multi-task framework outperforms the audio-only baseline on all metrics, including a 0.16 PESQ improvement. Extensive ablations show that the proposed model retains its improvements with multiple active distractors, over all noise types, and across different SNRs.
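To make the additive-correction-mask idea concrete, here is a minimal PyTorch sketch (not the authors' implementation) in which a projected visual embedding produces an additive correction to the audio-predicted magnitude mask; all layer sizes and module names are illustrative assumptions.

```python
# Hedged sketch: visual features correct an audio-only spectral mask additively.
import torch
import torch.nn as nn

class AdditiveMaskFusion(nn.Module):
    def __init__(self, n_freq=257, audio_dim=256, visual_dim=512):
        super().__init__()
        self.audio_net = nn.GRU(n_freq, audio_dim, batch_first=True)
        self.audio_mask = nn.Linear(audio_dim, n_freq)       # audio-only mask
        self.visual_proj = nn.Linear(visual_dim, audio_dim)  # align visual features
        self.visual_mask = nn.Linear(audio_dim, n_freq)      # additive correction mask

    def forward(self, noisy_mag, visual_feat):
        # noisy_mag: (B, T, F) magnitude spectrogram; visual_feat: (B, T, Dv)
        h, _ = self.audio_net(noisy_mag)
        base_mask = torch.sigmoid(self.audio_mask(h))
        correction = torch.tanh(self.visual_mask(self.visual_proj(visual_feat)))
        mask = (base_mask + correction).clamp(0.0, 1.0)       # additive correction
        return mask * noisy_mag                               # enhanced magnitude

x = torch.randn(2, 100, 257).abs()
v = torch.randn(2, 100, 512)
print(AdditiveMaskFusion()(x, v).shape)  # torch.Size([2, 100, 257])
```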
Recent advances in unsupervised domain adaptation have shown that mitigating the domain divergence by learning domain-invariant representations can significantly improve a model's generalization to the unlabeled data domain. However, existing methods fail to effectively preserve representations that are private to the unlabeled domain, which can adversely affect generalization. In this paper, we propose a method to preserve such representations so that the latent distribution of the unlabeled domain captures both domain-invariant features and features private to the unlabeled domain. In particular, we show that maximizing the mutual information between the unlabeled domain and its latent space while mitigating the domain divergence achieves this preservation. We also validate, theoretically and empirically, that preserving the representations private to the unlabeled domain is important and necessary for cross-domain generalization. Our method outperforms state-of-the-art approaches on several public datasets.
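The sketch below is a hedged PyTorch illustration of this combination, not the paper's exact formulation: a simple divergence term aligns source and target latents while an InfoNCE-style critic provides a lower bound on the mutual information between unlabeled-domain inputs and their latent codes. The encoder, critic, MMD stand-in, and loss weights are all illustrative assumptions.

```python
# Hedged sketch: domain alignment plus a mutual-information lower bound on the target domain.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
critic = nn.Bilinear(64, 32, 1)  # scores (input, latent) pairs

def infonce_mi_lower_bound(x_u, z_u):
    # scores[i][j] = critic(x_i, z_j); positives on the diagonal, negatives elsewhere.
    n = x_u.size(0)
    scores = critic(x_u.unsqueeze(1).expand(-1, n, -1).reshape(-1, 64),
                    z_u.repeat(n, 1)).view(n, n)
    labels = torch.arange(n)
    # Negative InfoNCE loss: maximizing it maximizes a lower bound on MI (up to log n).
    return -F.cross_entropy(scores, labels)

def mmd(z_s, z_t):
    # Simple mean-matching term as a stand-in for any domain-divergence measure.
    return (z_s.mean(0) - z_t.mean(0)).pow(2).sum()

x_src, x_tgt = torch.randn(16, 64), torch.randn(16, 64)
z_src, z_tgt = encoder(x_src), encoder(x_tgt)
loss = mmd(z_src, z_tgt) - 0.1 * infonce_mi_lower_bound(x_tgt, z_tgt)
loss.backward()
print(float(loss))
```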
Event cameras that asynchronously output low-latency event streams provide great opportunities for state estimation under challenging situations. Although event-based visual odometry has been extensively studied in recent years, most existing work is monocular, and there is little research on stereo event vision. In this paper, we present ESVIO, the first event-based stereo visual-inertial odometry, which leverages the complementary advantages of event streams, standard images, and inertial measurements. Our proposed pipeline performs temporal tracking and instantaneous matching between consecutive stereo event streams, thereby obtaining robust state estimation. In addition, a motion compensation method is designed to emphasize the edges of scenes by warping each event to a reference moment using the IMU and the ESVIO back-end. We validate that both ESIO (purely event-based) and ESVIO (event- and image-aided) achieve superior performance compared with other image-based and event-based baseline methods on public and self-collected datasets. Furthermore, we use our pipeline to perform onboard quadrotor flights in low-light environments. A real-world large-scale experiment is also conducted to demonstrate long-term effectiveness. We highlight that this work is a real-time, accurate system aimed at robust state estimation under challenging environments.
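As a concrete illustration of the motion-compensation idea, the following Python/NumPy sketch rotates each event back to a reference timestamp under a constant-angular-velocity, rotation-only assumption. This is not the ESVIO implementation; the intrinsics, the small-angle rotation, and the pure-rotation model are simplifying assumptions.

```python
# Hedged sketch: IMU-based rotational motion compensation of events.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def warp_events(events, t_ref, omega, K):
    # events: (N, 3) array of (x, y, t); omega: angular velocity (rad/s) from the IMU.
    K_inv = np.linalg.inv(K)
    warped = []
    for x, y, t in events:
        dt = t_ref - t
        R = np.eye(3) + skew(omega) * dt          # small-angle rotation approximation
        ray = R @ K_inv @ np.array([x, y, 1.0])   # rotate the bearing vector
        p = K @ (ray / ray[2])                    # reproject at the reference moment
        warped.append([p[0], p[1], t_ref])
    return np.array(warped)

K = np.array([[320.0, 0, 160], [0, 320, 120], [0, 0, 1]])
events = np.array([[100.0, 80.0, 0.002], [150.0, 90.0, 0.004]])
print(warp_events(events, t_ref=0.0, omega=np.array([0.0, 0.5, 0.0]), K=K))
```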
Egocentric 3D human pose estimation with a single head-mounted fisheye camera has recently attracted attention due to its numerous applications in virtual and augmented reality. Existing methods still struggle with challenging poses where the human body is highly occluded or closely interacting with the scene. To address this issue, we propose a scene-aware egocentric pose estimation method that guides the prediction of the egocentric pose with scene constraints. To this end, we propose an egocentric depth estimation network that predicts the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion caused by the human body with a depth-inpainting network. Next, we propose a scene-aware pose estimation network that projects the 2D image features and the estimated depth map of the scene into a voxel space and regresses the 3D pose with a V2V network. The voxel-based feature representation provides a direct geometric connection between 2D image features and scene geometry, and further enables the V2V network to constrain the predicted pose based on the estimated scene geometry. To enable the training of these networks, we also generate a synthetic dataset, called EgoGTA, and an in-the-wild dataset based on EgoPW, called EgoPW-Scene. Experimental results on our new evaluation sequences show that the predicted 3D egocentric poses are accurate and physically plausible in terms of human-scene interaction, demonstrating that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
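The following PyTorch sketch illustrates the voxel-lifting step in simplified form: 2D image features are sampled at the projections of voxel centers and kept only where the voxel is consistent with the estimated depth map. A pinhole model stands in for the fisheye camera, and the grid size, intrinsics, and tolerance are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: lift 2D features into a voxel grid using an estimated depth map.
import torch
import torch.nn.functional as F

def lift_to_voxels(feat2d, depth, K, grid, depth_tol=0.1):
    # feat2d: (C, H, W) image features; depth: (H, W) scene depth; grid: (V, 3) voxel centers.
    C, H, W = feat2d.shape
    cam = grid @ K.T                              # project voxel centers (camera frame) to pixels
    uv = cam[:, :2] / cam[:, 2:3]
    norm = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)  # normalize for grid_sample
    sampled = F.grid_sample(feat2d[None], norm[None, None], align_corners=True)[0, :, 0]   # (C, V)
    d_img = F.grid_sample(depth[None, None], norm[None, None], align_corners=True)[0, 0, 0]  # (V,)
    visible = (cam[:, 2] > 0) & ((cam[:, 2] - d_img).abs() < depth_tol)
    return sampled * visible.float()              # zero out voxels inconsistent with the depth map

K = torch.tensor([[128.0, 0, 64], [0, 128, 64], [0, 0, 1]])
feat2d, depth = torch.randn(16, 128, 128), torch.rand(128, 128) * 3 + 1
grid = torch.rand(1000, 3) * torch.tensor([2.0, 2.0, 3.0]) + torch.tensor([-1.0, -1.0, 1.0])
print(lift_to_voxels(feat2d, depth, K, grid).shape)  # torch.Size([16, 1000])
```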
Event cameras are motion-activated sensors that capture pixel-level illumination changes instead of intensity images at a fixed frame rate. Compared with standard cameras, they provide reliable visual perception during high-speed motion and in high-dynamic-range scenes. However, when the relative motion between the camera and the scene is limited, such as in a static state, an event camera outputs little information or even only noise, whereas a standard camera provides rich perceptual information in most scenarios, especially under good lighting conditions. The two cameras are therefore complementary. In this paper, we propose a robust, high-accuracy, real-time optimization-based event-based visual-inertial odometry (VIO) method with event corners, line-based event features, and point-based image features. The proposed method is designed to exploit point-based features in natural scenes and line-based features in man-made scenes, providing additional structural and constraint information through well-designed feature management. Experiments on public benchmark datasets show that our method achieves superior performance compared with image-based or event-based VIO. Finally, we demonstrate onboard closed-loop autonomous quadrotor flight and a large-scale outdoor experiment using our method. Videos of the evaluations are presented on our project website: https://b23.tv/oe3qm6j
This paper revisits a simple yet remarkably effective computational paradigm, Deep Mutual Learning (DML). We observe that its effectiveness is highly correlated with its excellent generalization quality. In this paper, we interpret the performance improvement of DML from a new perspective: it is approximately a Bayesian posterior sampling procedure. This also establishes the foundation for applying the Rényi divergence to improve the original DML, since it brings in variance control of the prior (in the context of DML). We therefore propose Rényi Divergence Deep Mutual Learning (RDML). Our empirical results demonstrate the advantage of marrying DML with the Rényi divergence. The flexible control imposed by the Rényi divergence is able to further improve DML so that it learns better-generalized models.
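The following PyTorch sketch shows the general shape of such a scheme: two student networks fit the labels while mimicking each other's predictive distribution through a Rényi divergence of order alpha in place of the usual KL term. The network sizes, the chosen alpha, and the exact loss weighting are illustrative assumptions rather than the RDML implementation.

```python
# Hedged sketch: deep mutual learning with a Renyi-divergence mimicry term.
import torch
import torch.nn as nn
import torch.nn.functional as F

def renyi_divergence(p, q, alpha=1.5, eps=1e-8):
    # D_alpha(p || q) = 1/(alpha-1) * log sum_k p_k^alpha * q_k^(1-alpha), averaged over the batch.
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return (torch.log((p.pow(alpha) * q.pow(1 - alpha)).sum(dim=1)) / (alpha - 1)).mean()

net1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
net2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
logits1, logits2 = net1(x), net2(x)
p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)

# Each student fits the labels and mimics its (detached) peer through the Renyi term.
loss1 = F.cross_entropy(logits1, y) + renyi_divergence(p2.detach(), p1, alpha=1.5)
loss2 = F.cross_entropy(logits2, y) + renyi_divergence(p1.detach(), p2, alpha=1.5)
(loss1 + loss2).backward()
print(float(loss1), float(loss2))
```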
We propose a cooperative localization framework for humans and robots based on visible light communication (VLC). With an experimental system, we demonstrate that it achieves high accuracy and real-time performance.
Despite recent progress in developing animatable full-body avatars, realistic modeling of clothing, one of the core aspects of human self-expression, remains an open challenge. State-of-the-art physical simulation methods can produce realistically behaving clothing geometry at interactive rates. Modeling photorealistic appearance, however, usually requires physically based rendering, which is too expensive for interactive applications. On the other hand, data-driven deep appearance models are capable of efficiently producing realistic appearance, but struggle to synthesize the geometry of highly dynamic clothing and to handle challenging body-clothing configurations. To this end, we introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data. The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, while at animation time we rely on physically simulated geometry. Our key contribution is a physically inspired appearance network capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects, even for unseen body-clothing configurations. We conduct a thorough evaluation of our model and demonstrate diverse animation results on several subjects and different types of clothing. Unlike previous work on photorealistic full-body avatars, our approach produces much richer dynamics and more realistic deformations, even for loose clothing. We also demonstrate that our formulation naturally allows clothing to be used with avatars of different people while remaining fully animatable, thus enabling photorealistic avatars with novel clothing for the first time.
Markerless monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant to extended reality, robotics, and virtual avatar generation. Due to the inherent depth ambiguity of the monocular setting, 3D motions captured with existing methods often contain severe artifacts such as incorrect body-scene interpenetrations, jitter, and body floating. To address these issues, we propose HULC, a new approach for 3D human MoCap that is aware of the scene geometry. HULC estimates 3D poses and dense body-environment surface contacts to improve 3D localization as well as the absolute scale of the subject. Furthermore, we introduce a 3D pose trajectory optimization based on a novel pose manifold sampling that resolves erroneous body-environment interpenetrations. Although the proposed method requires less structured input than existing scene-aware monocular MoCap algorithms, it produces more physically plausible poses: HULC significantly and consistently outperforms existing approaches in various experiments and on different metrics. Project page: https://vcai.mpi-inf.mpg.de/projects/hulc/.
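As a toy illustration of scene-aware pose refinement in this spirit (not the HULC optimization), the sketch below pulls joints with predicted contact onto a flat ground plane while penalizing penetration; the scene, the contact labels, and the loss weights are all assumed for illustration only.

```python
# Hedged sketch: refine 3D joints with contact and penetration terms against a ground plane.
import torch

joints_init = torch.tensor([[0.0, 0.9, 2.0], [0.1, -0.05, 2.0]])  # e.g. hip, foot
contact = torch.tensor([0.0, 1.0])           # predicted per-joint contact probability
joints = joints_init.clone().requires_grad_(True)
opt = torch.optim.Adam([joints], lr=1e-2)

def ground_height(xz):                        # scene geometry reduced to a flat ground at y = 0
    return torch.zeros(xz.shape[0])

for _ in range(200):
    opt.zero_grad()
    data_term = (joints - joints_init).pow(2).sum()       # stay near the initial estimate
    gap = joints[:, 1] - ground_height(joints[:, [0, 2]])
    contact_term = (contact * gap.abs()).sum()            # contact joints should touch the ground
    penetration_term = torch.relu(-gap).sum()             # no joint below the ground
    (data_term + 0.5 * contact_term + 5.0 * penetration_term).backward()
    opt.step()

print(joints.detach())  # the foot is pulled onto the ground plane
```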
Background: COVID-19 has become a global challenge, and properly planning medical resources is key to combating it. In the US Veterans Affairs health care system (VA), many enrollees are susceptible to COVID-19. Forecasting COVID-19 demand in order to promptly allocate medical resources has become a critical issue. When VA enrollees have COVID-19 symptoms, the recommended first step is to call a VA call center. For confirmed COVID-19 patients, the median time from first symptom to hospital admission is seven days. By predicting the number of COVID-19-related calls, we can forecast health care utilization and plan ahead for imminent demand. Objective: This study aims to develop a method to forecast the daily number of COVID-19-related calls for each of the 110 VA medical centers. Methods: In the proposed method, we pre-train a model using a group of medical centers and fine-tune it for each individual medical center. At the cluster level, we perform feature selection to choose significant features and an automatic hyperparameter search to select the best combination of hyperparameter values for the model. Conclusions: This study proposes an accurate method for forecasting the daily number of COVID-19-related calls at VA medical centers. The method overcomes modeling challenges by grouping similar medical centers into clusters to enlarge the training dataset, and by using hyperparameter search to automatically find the best combination of hyperparameter values for the model. With the proposed method, surges in health care demand can be predicted in advance, enabling health care practitioners to better plan medical resources and combat COVID-19.
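The following PyTorch sketch shows the general pretrain-then-fine-tune pattern described above: one forecaster is trained on pooled data from a cluster of similar medical centers, then copied and fine-tuned on an individual center's history. The feature dimension, model size, synthetic data, and training loop are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: cluster-level pre-training followed by per-center fine-tuning.
import copy
import torch
import torch.nn as nn

def make_model(n_features=8):
    return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

def train(model, x, y, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return model

# Synthetic daily-call features pooled over a cluster of similar centers.
cluster_x = torch.randn(300, 8)
cluster_y = cluster_x[:, 0] * 3 + torch.randn(300) * 0.1
pretrained = train(make_model(), cluster_x, cluster_y)

# Fine-tune a copy on one center's smaller dataset with a lower learning rate.
center_x, center_y = cluster_x[:60], cluster_y[:60]
center_model = train(copy.deepcopy(pretrained), center_x, center_y, epochs=20, lr=1e-3)
print(center_model(center_x[:1]).item())
```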